Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe this inefficiency stems from insufficient utilization of training signals. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, which raises the proportion of tokens used for reconstruction in each image while keeping the masking rate of each view unchanged. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train a ViT with half the effective training epochs (3.7 times less wall-clock time) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, DMJD also shows superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
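Below is a minimal sketch of the disjoint masking idea, assuming a flat ViT token grid; the view count, refill policy, and helper name are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def disjoint_masked_views(num_tokens, mask_ratio, num_views, rng=None):
    """Sample several masked views of one image whose masked index sets are drawn
    without replacement from a shared shuffled pool, so the union of reconstruction
    targets per image grows while each view keeps the same masking rate."""
    rng = rng or np.random.default_rng(0)
    num_masked = int(num_tokens * mask_ratio)
    unused = list(rng.permutation(num_tokens))        # tokens not yet used as targets
    views = []
    for _ in range(num_views):
        take = unused[:num_masked]
        unused = unused[num_masked:]
        if len(take) < num_masked:                    # pool exhausted: top up from a reshuffle
            refill = [t for t in rng.permutation(num_tokens) if t not in take]
            take += refill[: num_masked - len(take)]
        mask = np.zeros(num_tokens, dtype=bool)
        mask[take] = True                             # True = masked / to be reconstructed
        views.append(mask)
    return views

# e.g. a 14x14 ViT token grid, 75% masking rate, two views: together they cover all 196 tokens
views = disjoint_masked_views(196, 0.75, 2)
print([int(v.sum()) for v in views], int(np.logical_or(*views).sum()))
```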
With the development of natural language processing (NLP) techniques, automatic diagnosis of eye diseases from ophthalmology electronic medical records (OEMR) has become possible. The task aims to evaluate the condition of each of a patient's eyes, and we formulate it as a particular multi-label classification task in this paper. Although there are a few related studies on other diseases, automatic diagnosis of eye diseases exhibits unique characteristics. First, descriptions of both eyes are mixed up in OEMR documents, with both free text and templated asymptomatic descriptions, resulting in sparse and cluttered information. Second, OEMR documents contain multiple parts of descriptions and have long document lengths. Third, it is critical to provide explainability for the disease diagnosis model. To overcome these challenges, we present an effective automatic eye disease diagnosis framework, NEEDED. In this framework, a preprocessing module is integrated to improve the density and quality of information. Then, we design a hierarchical transformer structure for learning the contextualized representations of each sentence in the OEMR document. For the diagnosis part, we propose an attention-based predictor that enables traceable diagnosis by obtaining disease-specific information. Experiments on a real dataset and comparison with several baseline models show the advantages and explainability of our framework.
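A minimal PyTorch sketch of the two described components, hierarchical sentence/document encoding and a per-disease attention predictor, follows; the dimensions, pooling, and layer counts are illustrative assumptions, not NEEDED's actual configuration.

```python
import torch
import torch.nn as nn

class HierEncoder(nn.Module):
    """Sketch of a hierarchical encoder: a word-level transformer builds sentence
    vectors, then a sentence-level transformer contextualizes them across the document."""
    def __init__(self, vocab, d=256, heads=4, layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        block = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True), layers)
        self.word_enc, self.sent_enc = block(), block()

    def forward(self, doc_tokens):                # (sentences, words) token ids
        words = self.word_enc(self.emb(doc_tokens))
        sents = words.mean(dim=1).unsqueeze(0)    # (1, sentences, d) sentence vectors
        return self.sent_enc(sents).squeeze(0)    # contextualized sentence representations

class AttnDiseasePredictor(nn.Module):
    """Each disease owns a query that attends over sentence representations; the attention
    weights indicate which sentences support each predicted label (traceability)."""
    def __init__(self, d=256, num_diseases=10):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_diseases, d))
        self.out = nn.Linear(d, 1)

    def forward(self, sent_reps):                 # (sentences, d)
        attn = torch.softmax(self.queries @ sent_reps.T / sent_reps.size(-1) ** 0.5, dim=-1)
        disease_ctx = attn @ sent_reps            # disease-specific evidence vectors
        return torch.sigmoid(self.out(disease_ctx)).squeeze(-1), attn

enc, head = HierEncoder(vocab=5000), AttnDiseasePredictor()
probs, evidence = head(enc(torch.randint(0, 5000, (12, 20))))  # 12 sentences x 20 tokens
```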
Research on analyzing and parsing façades to enrich the 3D features of façade models with semantic information has drawn some attention in the community. Its main idea is to generate higher-resolution components with similar shapes and textures to increase the overall resolution at the expense of reconstruction accuracy. While this approach works well for components like windows and doors, there is currently no solution for the façade background. In this paper, we introduce the concept of a representative region texture, which can be used in the above modeling approach by tiling the representative texture over the façade region, and we propose a semi-supervised way to extract a representative region texture from a façade image. Our method does not require any additional labelled data to train as long as the semantic information is given, whereas a traditional end-to-end model requires plenty of data to increase its performance. Our method can extract texture from any repetitive image, not just façades, which an end-to-end model cannot do since it relies on the distribution of its training set. Clustering with a weighted distance is introduced to further increase robustness to noise or imprecise segmentation, and to make the extracted texture higher in resolution and more suitable for tiling. We verify our method on various façade images, and the results show a significant performance improvement compared with a simple random crop of the façade. We also demonstrate several application scenarios and propose a façade modeling workflow with the representative region texture, which yields better visual resolution for regular façades.
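A minimal sketch of the weighted-distance clustering step, assuming candidate background patches have already been cropped using the given semantic segmentation; the feature choice, weighting, and medoid selection are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def representative_patch(patches, weights, k=3, iters=20, seed=0):
    """Cluster candidate patches with a weighted distance and return the medoid of the
    heaviest cluster as the representative region texture. `patches` is (N, H, W, 3);
    `weights` down-weight patches near noisy or imprecisely segmented borders."""
    rng = np.random.default_rng(seed)
    feats = patches.reshape(len(patches), -1).astype(np.float64)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):                                  # weighted k-means
        d = ((feats[:, None] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for c in range(k):
            m = assign == c
            if m.any():
                centers[c] = np.average(feats[m], axis=0, weights=weights[m])
    best = np.bincount(assign, weights=weights, minlength=k).argmax()   # heaviest cluster
    members = np.where(assign == best)[0]
    medoid = members[((feats[members] - centers[best]) ** 2).sum(-1).argmin()]
    return patches[medoid]

# e.g. 40 candidate 32x32 crops with confidence weights derived from the segmentation mask
rep = representative_patch(np.random.rand(40, 32, 32, 3), np.random.rand(40))
```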
Lifelong person re-identification (LReID) is in significant demand for real-world deployment, as large amounts of ReID data are captured from diverse locations over time and inherently cannot be accessed all at once. A key challenge for LReID is how to incrementally preserve old knowledge while gradually adding new capabilities to the system. Unlike most existing LReID methods, which mainly focus on dealing with catastrophic forgetting, we address a more challenging problem: not only reducing forgetting on old tasks but also improving model performance on both new and old tasks during the lifelong learning process. Inspired by the biological process of human cognition, in which the somatosensory neocortex and the hippocampus work together in memory consolidation, we formulate a model called Knowledge Refreshing and Consolidation (KRC) that achieves both positive forward and backward transfer. More specifically, a knowledge refreshing scheme is incorporated with a knowledge rehearsal mechanism to enable bi-directional knowledge transfer by introducing a dynamic memory model and an adaptive working model. Moreover, a knowledge consolidation scheme operating on the dual space further improves model stability over the long term. Extensive evaluations show KRC's superiority over state-of-the-art LReID methods on challenging pedestrian benchmarks.
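The scheme below is only a generic dual-model rehearsal/refresh sketch written from the abstract's description, not KRC's actual algorithm; the losses, the EMA refresh rule, and all hyperparameters are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def dual_model_step(working, memory, new_batch, rehearsal_batch, opt, ema=0.999, alpha=1.0):
    """Rough lifelong-learning sketch: the adaptive working model fits the new task while
    being regularized toward the memory model on rehearsed old samples (knowledge rehearsal),
    and the memory model is slowly refreshed from the working model (knowledge refreshing)."""
    x_new, y_new = new_batch
    x_old, _ = rehearsal_batch
    opt.zero_grad()
    loss = F.cross_entropy(working(x_new), y_new)                  # learn the new task
    with torch.no_grad():
        old_targets = memory(x_old)                                # old knowledge as targets
    loss = loss + alpha * F.mse_loss(working(x_old), old_targets)  # rehearsal / distillation
    loss.backward()
    opt.step()
    with torch.no_grad():                                          # refresh the memory model
        for pm, pw in zip(memory.parameters(), working.parameters()):
            pm.mul_(ema).add_(pw, alpha=1 - ema)
    return loss.item()

# usage sketch: the memory model starts as a copy of the working model
working = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
memory = copy.deepcopy(working)
opt = torch.optim.SGD(working.parameters(), lr=0.01)
loss = dual_model_step(working, memory,
                       (torch.randn(8, 128), torch.randint(0, 10, (8,))),
                       (torch.randn(8, 128), None), opt)
```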
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
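A short usage sketch with the Hugging Face transformers library; the 560M-parameter public checkpoint is used here as an illustrative stand-in, since the full 176B-parameter bigscience/bloom model requires multi-GPU or offloaded inference.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The full release is "bigscience/bloom" (176B parameters); the 560M variant below is a
# stand-in that fits on a single commodity GPU or CPU.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Translate to French: The weather is nice today.\nTranslation:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```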
The evolution of language follows the rule of gradual change. Grammar, vocabulary, and lexical semantic shifts take place over time, resulting in a diachronic linguistic gap. As a result, a large amount of text is written in the language of different eras, which creates obstacles for natural language processing tasks such as word segmentation and machine translation. Although the Chinese language has a long history, previous Chinese natural language processing research has mainly focused on tasks within a specific era. Therefore, we propose a cross-era learning framework for Chinese word segmentation (CWS) that uses a switch-memory (SM) module to incorporate era-specific linguistic knowledge. Experiments on four corpora from different eras show that the performance of each corpus significantly improves. Further analyses also demonstrate that the SM module can effectively integrate era knowledge into the neural network.
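A rough sketch of what an era-switching memory module could look like, stated as an assumption since the abstract does not specify the architecture; the slot count, gating, and fusion rule are illustrative.

```python
import torch
import torch.nn as nn

class SwitchMemory(nn.Module):
    """Illustrative switch-memory module: each era owns a small memory, a switch
    distribution over eras is predicted from the sentence, and the retrieved era
    knowledge is fused into token representations before the CWS tagging layer."""
    def __init__(self, d=256, num_eras=4, slots=64):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_eras, slots, d) * 0.02)
        self.switch = nn.Linear(d, num_eras)

    def forward(self, token_reps):                        # (batch, seq, d)
        sent = token_reps.mean(dim=1)                     # sentence summary
        era_w = torch.softmax(self.switch(sent), dim=-1)  # which era(s) this text belongs to
        mem = torch.einsum("be,esd->bsd", era_w, self.memory)        # era-mixed memory
        attn = torch.softmax(
            token_reps @ mem.transpose(1, 2) / token_reps.size(-1) ** 0.5, dim=-1)
        return token_reps + attn @ mem                    # fuse retrieved era knowledge

reps = torch.randn(2, 30, 256)                            # e.g. encoder outputs for 30 characters
fused = SwitchMemory()(reps)                              # feed into a CRF/softmax CWS tagger
```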
Preconditioning has long been a staple technique in optimization and machine learning. It often reduces the condition number of the matrix it is applied to, thereby speeding up the convergence of optimization algorithms. Although there are many popular preconditioning techniques in practice, most lack theoretical guarantees on the reduction of the condition number. In this paper, we study the problem of optimal diagonal preconditioning: achieving the maximal reduction in the condition number of any full-rank matrix by scaling its rows or columns separately or simultaneously. We first reformulate the problem as a quasi-convex problem and provide a baseline bisection algorithm that is easy to implement in practice, where each iteration consists of an SDP feasibility problem. We then propose a polynomial-time potential-reduction algorithm with $O(\log(\frac{1}{\epsilon}))$ iteration complexity, where each iteration consists of a Newton update based on the Nesterov-Todd direction. Our algorithm is based on a formulation of the problem that is a generalized version of the von Neumann optimal growth problem. Next, we focus on one-sided optimal diagonal preconditioning problems and show that they can be formulated as standard dual SDP problems, to which we apply efficient customized solvers, and we study the empirical performance of our optimal diagonal preconditioners. Extensive experiments on large matrices demonstrate the practical appeal of optimal diagonal preconditioning at reducing condition numbers, compared with heuristics-based preconditioners.
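As a point of reference for what diagonal preconditioning does, here is a minimal sketch using heuristic Ruiz-style equilibration, i.e. the kind of heuristic baseline the paper's optimal SDP-based and potential-reduction algorithms improve on; the matrix sizes and scaling rule are illustrative.

```python
import numpy as np

def ruiz_equilibrate(A, iters=20):
    """Heuristic two-sided diagonal scaling: find diagonal matrices D1, D2 such that
    cond(D1 @ A @ D2) is (usually) much smaller than cond(A). This is not the paper's
    optimal diagonal preconditioner, just a common heuristic for comparison."""
    d1 = np.ones(A.shape[0])
    d2 = np.ones(A.shape[1])
    M = A.copy()
    for _ in range(iters):
        r = np.sqrt(np.abs(M).max(axis=1))      # row infinity norms
        c = np.sqrt(np.abs(M).max(axis=0))      # column infinity norms
        r[r == 0] = 1.0
        c[c == 0] = 1.0
        M = M / r[:, None] / c[None, :]
        d1 /= r
        d2 /= c
    return np.diag(d1), np.diag(d2)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) * np.exp(3 * rng.standard_normal(200))[:, None]  # badly scaled rows
D1, D2 = ruiz_equilibrate(A)
print(np.linalg.cond(A), np.linalg.cond(D1 @ A @ D2))   # the condition number drops substantially
```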
The conversation scenario is one of the most important and most challenging scenarios for speech processing technologies, because people in conversation respond to each other in a casual style. Detecting the speech activities of each person in a conversation is vital to downstream tasks such as natural language processing and machine translation. The detection technology for "who speaks when" is referred to as speaker diarization (SD). Traditionally, the diarization error rate (DER) has long served as the standard evaluation metric for SD systems. However, DER does not give enough emphasis to short conversational phrases, which are important at the semantic level. In addition, a carefully and accurately manually annotated test dataset suitable for evaluating conversational SD technologies is still unavailable in the speech community. In this paper, we design and describe the Conversational Short-phrase Speaker Diarization (CSSD) task, which consists of training and testing datasets, an evaluation metric, and baselines. On the dataset side, in addition to the previously open-sourced 180-hour conversational MagicData-RAMC dataset, we also prepare a 20-hour conversational speech test dataset with carefully verified timestamp annotations for the CSSD task. On the metric side, we design the new conversational DER (CDER) evaluation metric, which computes SD accuracy at the utterance level. On the baseline side, we adopt a commonly used method, the variational Bayes HMM x-vector system, as the baseline for the CSSD task. Our evaluation metric is publicly available at https://github.com/speechclub/cder_metric.
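As a rough illustration of scoring at the utterance level rather than by accumulated time, here is a toy sketch; it is an assumption for illustration only and is not the CDER definition, which is specified in the linked repository (for instance, it ignores the speaker-mapping step of real diarization scoring).

```python
def utterance_level_error(ref_utts, sys_segments):
    """Toy utterance-level diarization score: an utterance counts as wrong if the
    system's dominant speaker within its time span differs from the reference speaker,
    so a short phrase weighs as much as a long turn."""
    def dominant_speaker(start, end):
        overlap = {}
        for spk, s, e in sys_segments:                        # (speaker, start, end)
            dur = max(0.0, min(end, e) - max(start, s))
            overlap[spk] = overlap.get(spk, 0.0) + dur
        return max(overlap, key=overlap.get) if overlap else None

    wrong = sum(dominant_speaker(s, e) != spk for spk, s, e in ref_utts)
    return wrong / len(ref_utts)

ref = [("A", 0.0, 1.2), ("B", 1.3, 1.6), ("A", 1.7, 3.0)]     # short phrases count equally
sys = [("A", 0.0, 1.5), ("A", 1.5, 3.0)]                      # misses B's short phrase
print(utterance_level_error(ref, sys))                        # 1/3 of utterances are wrong
```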
In recent years, the use of orthogonal matrices has been shown to be a promising approach to improving recurrent neural networks (RNNs) in terms of training, stability, and convergence, particularly for controlling gradients. While the gated recurrent unit (GRU) and long short-term memory (LSTM) architectures address the vanishing gradient problem through various gates and memory cells, they are still prone to the exploding gradient problem. In this work, we analyze the gradients in the GRU and propose the use of orthogonal matrices to prevent exploding gradients and enhance long-term memory. We study where to use orthogonal matrices and propose a Neumann-series-based scaled Cayley transformation for training orthogonal matrices in the GRU, which we call the Neumann-Cayley orthogonal GRU, or simply NC-GRU. We present detailed experiments on several synthetic and real-world tasks, which show that NC-GRU significantly outperforms the GRU as well as several other RNNs.
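A minimal sketch of the underlying Cayley-plus-Neumann-series idea, assuming a standalone weight matrix rather than the full NC-GRU cell; the scaling constant and number of series terms are illustrative.

```python
import torch

def neumann_cayley(weight, terms=8, scale=0.5):
    """Build a skew-symmetric matrix from a free parameter, then map it to a
    (near-)orthogonal matrix via the Cayley transform, approximating the matrix
    inverse with a truncated Neumann series so no explicit inverse is needed."""
    A = scale * (weight - weight.T)                   # skew-symmetric, scaled so ||A|| < 1
    I = torch.eye(A.shape[0], dtype=A.dtype)
    inv_approx = torch.zeros_like(A)                  # Neumann series for (I + A)^{-1}
    term = I.clone()
    for _ in range(terms):
        inv_approx = inv_approx + term
        term = -term @ A                              # successive powers (-A)^k
    return inv_approx @ (I - A)                       # Cayley: (I + A)^{-1} (I - A)

W = neumann_cayley(0.02 * torch.randn(64, 64))
print(torch.linalg.norm(W.T @ W - torch.eye(64)))     # close to zero -> nearly orthogonal
```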
Recent sequential recommendation models increasingly rely on consecutive short-term user interaction sequences to model user interests. However, these approaches raise concerns about both short-term and long-term interests. (1) Short-term: an interaction sequence may not result from a single interest but from several intertwined interests, even within a short period, which leads to a failure to model skip behaviors. (2) Long-term: interaction sequences are mostly observed sparsely at discrete intervals rather than continuously over the long term. This makes it difficult to infer long-term interests, since only discrete interest representations can be derived without accounting for interest dynamics across sequences. In this study, we address these concerns by learning (1) multi-scale representations of short-term interests and (2) dynamics-aware representations of long-term interests. To this end, we propose an Interest Dynamics modeling framework using generative Neural Processes, coined IDNP, to model user interests from a functional perspective. IDNP learns a global interest function family to define each user's long-term interest as a functional instantiation, manifesting interest dynamics through functional continuity. Specifically, IDNP first encodes each user's short-term interactions into multi-scale representations, which are then summarized into a user context. By combining the latent global interest with the user context, IDNP then reconstructs long-term user interest functions and predicts interactions at upcoming query timesteps. Moreover, IDNP can model such interest functions even when interaction sequences are limited and non-consecutive. Extensive experiments on four real-world datasets demonstrate that our model outperforms the state-of-the-art on various evaluation metrics.
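A minimal neural-process-style sketch of the functional view of interests, with assumed dimensions, a single-scale encoder, and no generative latent path (all simplifications relative to IDNP): observed (time, interaction) pairs are summarized into a user context, from which an interest value can be decoded at any query time, even when the observations are sparse and non-consecutive.

```python
import torch
import torch.nn as nn

class TinyInterestNP(nn.Module):
    """Toy neural process for interest functions: encode observed (time, interaction)
    pairs into a permutation-invariant user context, then decode an interest
    embedding at arbitrary query times."""
    def __init__(self, d_item=16, d_hid=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(1 + d_item, d_hid), nn.ReLU(), nn.Linear(d_hid, d_hid))
        self.dec = nn.Sequential(nn.Linear(1 + d_hid, d_hid), nn.ReLU(), nn.Linear(d_hid, d_item))

    def forward(self, ctx_t, ctx_x, query_t):
        # ctx_t: (B, N, 1) observed timestamps, ctx_x: (B, N, d_item) interaction embeddings
        r = self.enc(torch.cat([ctx_t, ctx_x], dim=-1)).mean(dim=1)    # user context
        r = r.unsqueeze(1).expand(-1, query_t.size(1), -1)
        return self.dec(torch.cat([query_t, r], dim=-1))               # interest at query times

model = TinyInterestNP()
ctx_t = torch.rand(4, 10, 1)                      # 10 sparse, non-consecutive observations per user
ctx_x = torch.randn(4, 10, 16)
pred = model(ctx_t, ctx_x, torch.rand(4, 5, 1))   # predict interest embeddings at 5 query times
```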